AI Alternative Futures: Mapping the Scenario Space of Advanced Artificial Intelligence

Description:

This project aims to map the range of alternative future scenarios arising from the development of advanced artificial intelligence (AI). The goal is to cover the full spectrum of AI risks and to identify novel or under-examined combinations of scenarios (e.g., structural risks and the erosion of decision-making and values).
 
For each question, the survey presents an AI dimension (key factor) followed by three to four conditions (scenario elements or outcomes) and asks the participant to rank each condition by its potential impact and likelihood. The goal of the survey is not prediction but to derive specific values that outline the space of plausible futures.
 
All questions are multiple-choice from a drop-down menu, and responses will remain completely anonymous. Thank you very much for your participation!

For further details on the project, purpose, and methodology, see here:
https://tinyurl.com/AIfutures

Frame of reference: an unspecified year between 2045 and 2065. The assumption is that AI technology has brought widespread change. It is unclear whether this change has been positive, negative, or mixed, how quickly it occurred, or to what extent. Fill in the gaps.
------------------------------------------------------------------------------------------
Details on measurement
1) Impact. Impact identifies which "condition" of each dimension could have the greatest positive or negative outcome. When a dimension's conditions are all negative, all positive, or all neutral, please choose the most or least impactful of the group.
 
2) Likelihood. For likelihood, you can think in terms of "plausibility," as these are highly uncertain conditions (very unlikely = 5-20%, unlikely = 20-40%, even chance = 40-60%, likely = 60-80%, very likely = 80-95%).
1. System power - capability and generality
System capability (power) and generality (learning across multiple domains), ranging from uneven capability (status quo), to transformative but limited (medium), to narrow superintelligence (AI services), to high power (AGI and superhuman systems).
Impact
Likelihood
Low power (status quo)
Moderate power
High power (narrow ASI)
High power (AGI-ASI)
2. Distribution of advanced AI systems
Distribution measures how widely capabilities will be distributed across society and the balance of power: open source, only major companies, or a single laboratory or system.
Impact 
Likelihood
Widely distributed 
Moderately distributed
Concentrated in one lab/system 
3. Takeoff speed - rate of change
Slow: status quo and potential AI winters. Moderate (uncontrolled): transformational change that is difficult for society to normalize. Moderate-fast (competitive): radical transitions, but anticipated and pursued for competitive advantage. Fast: the standard hard takeoff scenario.
Impact
Likelihood
Slow takeoff (multiple decades)
Moderate takeoff (uncontrolled)
Moderate-fast takeoff (controlled-competitive)
Fast takeoff (months, days, hours)
4. Technological paradigm for advanced AI
As high-level capabilities near, which paradigm will get us there: the current paradigm, a new learning paradigm or architecture (e.g., quantum computing), or deep learning plus a new innovation?
Impact
Likelihood
Deep learning (current paradigm)
New discovery or innovation (e.g., quantum, new insight from neuroscience) 
Deep learning, plus new innovation
5. Potential accelerants
What could quickly accelerate AI to new capabilities: a compute overhang or bottleneck, a new insight or paradigm, or a new data type or simulated embodiment?
Impact
Likelihood
Compute overhang or bottleneck
New insight or paradigm
Simulated embodiment or novel data type/structure
6. Timeline to advanced AI
This is not a prediction. For the timeline, please note what you think is most plausible: an unexpected hard takeoff, a moderate slow-moving train wreck, or incremental development over decades.
Impact
Likelihood
Over 50 years
Between 20 - 50 years
Less than 20 years
7. AI race dynamics
Under the assumption that AI will reach advanced capabilities, is it more plausible that countries will conduct normal market competition, become more isolated and cooperate less, aggressively pursue sector control, or accelerate into a national AI arms race?
Impact
Likelihood
Economic cooperation increases
Inward turn toward isolationism
AI monopolies centralize control
Government-led AI "arms race"
8. Dominant risks with advanced AI systems
With the development of advanced systems, will the primary risks be misuse (such as cyber attacks), accidents and failure modes (misaligned goals), or systemic risks (e.g., creeping normalization and value erosion)?
Impact
Likelihood
Misuse (e.g., cyber, disinformation)
Accidents or failures
Systemic risks (e.g., value erosion)
9. Primary AI safety challenges
As advanced capabilities near, what will plausibly remain the most difficult unsolved problem? Will goal alignment remain the most intractable? Will it be deception or power-seeking? Or will explainability remain problematic (provided outer alignment is managed)?
Impact
Likelihood
Goal alignment 
Deception and influence-seeking 
Explainability 
10. AI safety techniques with advanced systems
To control high-level systems, which of the options below is most plausible? Will our current techniques scale to high-level machine intelligence (HLMI)? Will new safety techniques need to be developed from the ground up? Or will custom methods be required for each new instantiation?
Impact
Likelihood
Current safety techniques scale to advanced AI
New techniques required for high-powered systems
Custom techniques needed for each new instantiation 
11. Developer of advanced AI
In your view, which entity will plausibly develop the first high-level machine intelligence: a group of countries (e.g., an Eastern bloc), an individual country, powerful corporations (e.g., Google, Tencent), or an individual developer?
Impact
Likelihood
International coalition 
Individual country
Corporation(s)
Individual developer
12. Developer location
In which region is it most plausible that high-level systems will first be developed?
Impact
Likelihood
USA-Western Europe
Asia-Pacific
Africa or Latin America/Caribbean
13. International governance structures at the time of advanced systems
By the time advanced capabilities come online, is it more plausible that there would be a decrease in collective action, an increase in international norms of safe use, international safety regimes, or even treaties and verification measures between countries?
Impact
Likelihood
Decrease in collective action due to competition or conflict
International norms on safety 
International safety regime and verification
14. Corporate governance at the time of advanced systems
By the time advanced capabilities come online, is it more plausible that leading companies will decrease cooperation due to competition, increase cooperation on safety methods, or make full commitments to set standards for safe development and use?
Impact
Likelihood
Decrease in cooperation 
Coordination on common safety techniques
Commitments on common standards
15. How familiar are you with AI safety or existential risk? (Required.)
16. Do you now work, or have you ever worked, in AI safety? (Required.)
17. How familiar are you with AI governance?
18. Please leave any comments or suggestions. Thank you!